Exacerbating Algorithmic Bias through Fairness Attacks
Authors
Abstract
Algorithmic fairness has attracted significant attention in recent years, with many quantitative measures suggested for characterizing the fairness of different machine learning algorithms. Despite this interest, the robustness of those fairness measures with respect to an intentional adversarial attack has not been properly addressed. Indeed, most adversarial machine learning research has focused on the impact of malicious attacks on the accuracy of the system, without any regard to the system's fairness. We propose new types of data poisoning attacks where an adversary intentionally targets the fairness of a system. Specifically, we propose two families of attacks that target fairness measures. In the anchoring attack, we skew the decision boundary by placing poisoned points near specific target points to bias the outcome. In the influence attack on fairness, we aim to maximize the covariance between the sensitive attributes and the decision outcome and affect the fairness of the model. We conduct extensive experiments that indicate the effectiveness of our proposed attacks.
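To make the two attack families concrete, here is a minimal, illustrative sketch rather than the authors' implementation: a simplified anchoring-style poisoning step that perturbs copies of existing points and flips their labels, and the sensitive-attribute/decision covariance that the influence attack on fairness aims to drive up. The random target selection, perturbation rule, and binary 0/1 labels are assumptions made for illustration only.

```python
import numpy as np

def anchoring_attack(X, y, sensitive, n_poison, sigma=0.01, seed=None):
    """Simplified anchoring-style poisoning: copy randomly chosen target points,
    perturb them slightly, and give them the opposite label so that a model
    trained on the poisoned data skews its decision boundary around those targets.
    Illustrative only; the paper selects targets per demographic group."""
    rng = np.random.default_rng(seed)
    X_p, y_p, s_p = [], [], []
    for _ in range(n_poison):
        idx = rng.integers(len(X))                                # hypothetical target choice
        X_p.append(X[idx] + rng.normal(0.0, sigma, X.shape[1]))   # poisoned point near the target
        y_p.append(1 - y[idx])                                    # opposite label (assumes labels in {0, 1})
        s_p.append(sensitive[idx])                                # keep the target's sensitive attribute
    return (np.vstack([X, np.array(X_p)]),
            np.concatenate([y, np.array(y_p)]),
            np.concatenate([sensitive, np.array(s_p)]))

def decision_covariance(sensitive, y_hat):
    """Covariance between the sensitive attribute and the model's decisions,
    the quantity the influence attack on fairness tries to maximize."""
    return float(np.mean((sensitive - sensitive.mean()) * (y_hat - y_hat.mean())))
```

In this sketch the poisoned points inherit the sensitive attribute of their targets, so retraining on the augmented data tends to shift predictions for one group; the covariance function is one simple proxy for the demographic disparity that such an attack amplifies.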
Similar resources
Fairness and Bias
Fairness is a social rather than a psychometric concept. Its definition depends on what one considers to be fair. Fairness has no single meaning and, therefore, no single definition, whether statistical, psychometric, or social. The Standards notes four possible meanings of “fairness.” The first meaning views fairness as requiring equal group outcomes (e.g., equal passing rates for subgroups of...
Algorithmic Bias in Autonomous Systems
Algorithms play a key role in the functioning of autonomous systems, and so concerns have periodically been raised about the possibility of algorithmic bias. However, debates in this area have been hampered by different meanings and uses of the term, “bias.” It is sometimes used as a purely descriptive term, sometimes as a pejorative term, and such variations can promote confusion and hamper di...
Algorithmic Tamper-Proof Security under Probing Attacks
Gennaro et al. initiated the study of algorithmic tamper proof (ATP) cryptography: cryptographic hardware that remains secure even in the presence of an adversary who can tamper with the memory content of a hardware device. In this paper, we solve an open problem stated in their paper, and also consider whether a device can be secured against an adversary who can both tamper with its memory and...
Denial of Service via Algorithmic Complexity Attacks
We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications’ data structures. Frequently used data structures have “average-case” expected running time that’s far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an atta...
Fairness analysis through priority
We report on the extension of the CSP-based refinement checker FDR to encompass a prioritisation operator as envisaged in [23]. This is embedded into the tool using similar technology to the well-known chase operator. We show how it can be used to analyse systems under what we term unstable failures, in which the usual notion of failure is augmented by a fair notion of acceptance along what wou...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i10.17080